2.
Skeletal Radiol; 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38270616

ABSTRACT

OBJECTIVE: To assess the feasibility of using large language models (LLMs), specifically ChatGPT-4, to generate concise and accurate layperson summaries of musculoskeletal radiology reports.

METHODS: Sixty radiology reports (20 MR shoulder, 20 MR knee, and 20 MR lumbar spine) were obtained via PACS, deidentified, and submitted to ChatGPT-4 with the prompt "Produce an organized and concise layperson summary of the findings of the following radiology report. Target a reading level of 8-9th grade and word count <300 words." Three independent readers (two primary and a third added later for validation) evaluated the summaries for completeness and accuracy against the original reports. Summaries were rated on a scale of 1 to 3: 1) incorrect or incomplete, potentially providing harmful or confusing information; 2) mostly correct and complete, unlikely to cause confusion or harm; and 3) entirely correct and complete.

RESULTS: All 60 responses met the word-count and readability criteria. Mean accuracy ratings were 2.58 for reader 1, 2.71 for reader 2, and 2.77 for reader 3; mean completeness ratings were 2.87, 2.73, and 2.87, respectively. For accuracy, reader 1 rated three summaries a 1, reader 2 rated one, and reader 3 rated none. For the two primary readers, inter-reader agreement was low for both accuracy (kappa 0.33) and completeness (kappa 0.29), and including the third reader's ratings did not significantly change agreement.

CONCLUSION: Overall ratings for accuracy and completeness of the AI-generated layperson summaries were high, with only a small minority likely to be confusing or inaccurate. This study illustrates the potential of generative AI, such as ChatGPT-4, to automate the production of patient-friendly summaries of musculoskeletal MR imaging reports.
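As a concrete illustration of the workflow this abstract describes, the following Python sketch shows how a deidentified report might be submitted with the study's prompt, how the <300-word criterion could be checked, and how the two primary readers' agreement could be computed with Cohen's kappa. The OpenAI client usage, the model identifier, the helper names, and the sample ratings are illustrative assumptions, not the authors' actual code.

# Minimal sketch, assuming the openai>=1.x Python client and scikit-learn;
# the paper does not describe its tooling.
from openai import OpenAI
from sklearn.metrics import cohen_kappa_score

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Produce an organized and concise layperson summary of the findings "
    "of the following radiology report. Target a reading level of 8-9th "
    "grade and word count <300 words."
)

def summarize_report(report_text: str) -> str:
    # Submit one deidentified report together with the study's prompt.
    response = client.chat.completions.create(
        model="gpt-4",  # assumed identifier for ChatGPT-4
        messages=[{"role": "user", "content": f"{PROMPT}\n\n{report_text}"}],
    )
    return response.choices[0].message.content

def meets_word_count(summary: str, limit: int = 300) -> bool:
    # The study's word-count criterion: fewer than 300 words.
    return len(summary.split()) < limit

# Inter-reader agreement on the 1-3 rating scale (hypothetical ratings).
reader1 = [3, 2, 3, 3, 2, 3]
reader2 = [3, 3, 3, 2, 2, 3]
print(f"kappa = {cohen_kappa_score(reader1, reader2):.2f}")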

3.
Acad Radiol; 31(1): 338-342, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37709612

ABSTRACT

RATIONALE AND OBJECTIVES: With recent advancements in the power and accessibility of artificial intelligence (AI) large language models (LLMs), patients may increasingly turn to these platforms for answers to questions about radiologic examinations and procedures, despite valid concerns about the accuracy of the information provided. This study aimed to assess the accuracy and completeness of patient-education information provided by the Bing Chatbot, an LLM powered by ChatGPT, for common radiologic exams.

MATERIALS AND METHODS: Three common radiologic examinations and procedures were selected: computed tomography (CT) of the abdomen, magnetic resonance imaging (MRI) of the spine, and bone biopsy. For each, ten questions were posed to the chatbot in two trials using three different chatbot settings. Two reviewers independently assessed the responses for accuracy and completeness against an accepted online resource, radiologyinfo.org.

RESULTS: Of the 360 reviews performed, 336 (93%) were rated "entirely correct" and 24 (7%) "mostly correct", indicating a high level of reliability. For completeness, 65% of responses were rated "complete" and 35% "mostly complete". The "More Creative" chatbot setting produced a higher proportion of responses rated "entirely correct", but there were otherwise no significant differences in ratings across chatbot settings or exam types. Readability was rated at an eighth-grade level.

CONCLUSION: The Bing Chatbot provided accurate responses answering all or most aspects of the questions asked, tending to err on the side of caution for nuanced questions. Importantly, no responses were inaccurate or had the potential to cause harm or confusion. LLM chatbots thus show potential to enhance patient education in radiology and could be integrated into patient portals for purposes such as exam preparation and results interpretation.
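The reading-level finding can be estimated in a few lines. The sketch below computes a Flesch-Kincaid grade level for a chatbot response; the textstat package and the sample text are assumptions for illustration, as the paper does not name the readability tool it used.

# Minimal sketch, assuming the textstat package (pip install textstat);
# the sample response text is hypothetical.
import textstat

response_text = (
    "A CT scan of the abdomen uses X-rays to take detailed pictures of "
    "the organs in your belly. Your doctor may ask you not to eat or "
    "drink for a few hours before the exam."
)

# Flesch-Kincaid grade; the study reports roughly an eighth-grade level.
grade = textstat.flesch_kincaid_grade(response_text)
print(f"Estimated reading grade level: {grade:.1f}")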


Subjects
Artificial Intelligence, Radiology, Humans, Reproducibility of Results, Patient Education as Topic, Radiography